We analyze the convergence behaviour of a recently proposed algorithm for regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is based on a new interpretation of DAL as a proximal minimization algorithm. We theoretically show under some conditions that DAL converges super-linearly in a non-asymptotic and global sense. Due to a special modelling of sparse estimation problems in the context of machine learning, the assumptions we make are milder and more natural than those made in conventional analyses of augmented Lagrangian algorithms. In addition, the new interpretation enables us to generalize DAL to a wide variety of sparse estimation problems. We experimentally confirm our analysis on a large-scale $\ell_1$-regularized logistic regression problem and extensively compare the efficiency of the DAL algorithm to previously proposed algorithms on both synthetic and benchmark datasets.